
    A database of whole-body action videos for the study of action, emotion, and untrustworthiness

    We present a database of high-definition (HD) videos for the study of traits inferred from whole-body actions. Twenty-nine actors (19 female) were filmed performing different actions—walking, picking up a box, putting down a box, jumping, sitting down, and standing and acting—while conveying different traits, including four emotions (anger, fear, happiness, sadness), untrustworthiness, and neutral, where no specific trait was conveyed. Actions conveying the four emotions and untrustworthiness were filmed multiple times, with the actor portraying each trait at different levels of intensity. In total, we made 2,783 action videos (in both two-dimensional and three-dimensional formats), each lasting 7 s at a frame rate of 50 fps. All videos were filmed in a green-screen studio in order to isolate the action information from all contextual detail and to provide a flexible stimulus set for future use. To validate the traits conveyed by each action, we asked participants to rate each two-dimensional video on the trait the actor portrayed. To make the database easy to use, each video name contains information on the gender of the actor, the action executed, the trait conveyed, and the rating of its perceived intensity. All videos can be downloaded free of charge at the following address: http://www-users.york.ac.uk/~neb506/databases.html. We discuss potential uses for the database in the analysis of the perception of whole-body actions.
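
    Because each video's filename encodes its metadata, stimuli can be filtered programmatically. Below is a minimal sketch assuming a hypothetical underscore-delimited naming scheme; the database's actual convention is documented at the download page above.

```python
# Hypothetical example: filter stimulus videos by metadata encoded in filenames.
# The naming scheme (gender_action_trait_intensity.ext) is an assumption for
# illustration only; consult the database documentation for the real convention.
from pathlib import Path

def parse_stimulus_name(path: Path) -> dict:
    """Split a hypothetical 'F_walk_anger_4.mp4'-style filename into fields."""
    gender, action, trait, intensity = path.stem.split("_")
    return {
        "gender": gender,             # e.g. 'F' or 'M'
        "action": action,             # e.g. 'walk', 'jump'
        "trait": trait,               # e.g. 'anger', 'neutral'
        "intensity": int(intensity),  # perceived-intensity rating
    }

# Collect all higher-intensity anger clips from a local copy of the database.
videos = [parse_stimulus_name(p) for p in Path("videos").glob("*.mp4")]
anger_clips = [v for v in videos if v["trait"] == "anger" and v["intensity"] >= 3]
```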

    Visual adaptation enhances action sound discrimination

    Prolonged exposure, or adaptation, to a stimulus in one modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in one modality can bias perception in another modality. Here we show a novel crossmodal adaptation effect, in which adaptation to a visual stimulus enhances subsequent auditory perception. We found that, compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between two subsequently presented hand action sounds. Discrimination was most enhanced when the visual action ‘matched’ the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action-sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by post-perceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.

    Modelling diverse root density dynamics and deep nitrogen uptake — a simple approach

    We present a 2-D model for simulating root density and plant nitrogen (N) uptake for crops grown in agricultural systems, based on a modification of the root density equation originally proposed by Gerwitz and Page (J Appl Ecol 11:773–781, 1974). A root system form parameter was introduced to describe the distribution of root length vertically and horizontally in the soil profile. The form parameter can vary from 0, where root density is evenly distributed through the soil profile, to 8, where practically all roots are found near the surface. The root model has other components describing root features such as specific root length and plant N uptake kinetics. The same approach is used to distribute root length horizontally, allowing simulation of root growth and plant N uptake in row crops. The rooting depth penetration rate and the depth distribution of root density were found to be the most important parameters controlling crop N uptake from deeper soil layers. The validity of the root distribution model was tested against field data for white cabbage, red beet, and leek. The model was able to simulate very different root distributions, but it could not reproduce the increase of root density with depth observed in the experimental results for white cabbage. The model was able to simulate N depletion in different soil layers in two field studies: one included vegetable crops with very different rooting depths, and the other compared effects of spring wheat and winter wheat. In both experiments, spring soil N availability and its depth distribution were varied by the use of cover crops, demonstrating the model's sensitivity to the form parameter value and its ability to reproduce N depletion in soil layers. This work shows that the relatively simple root model, driven by degree days and simulated crop growth, can appropriately simulate crop N uptake and soil N depletion in low-N-input crop production systems while requiring few measured parameters.
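
    The Gerwitz and Page equation describes root density as declining exponentially with depth. Below is a minimal sketch of that baseline 1-D form, with a shape parameter `a` standing in for the paper's form parameter; the exact parameterization, the 2-D horizontal extension, and the N-uptake kinetics of the published model are not reproduced here.

```python
import numpy as np

def root_density(z, L0, a):
    """Exponential root density profile after Gerwitz & Page (1974):
    L(z) = L0 * exp(-a * z), where z is depth (m) and L0 is the density at
    the surface. Small a approaches an even distribution through the
    profile; large a concentrates practically all roots near the surface,
    analogous to the paper's form parameter."""
    return L0 * np.exp(-a * z)

depths = np.linspace(0.0, 1.5, 16)                # soil profile, 0-1.5 m
shallow = root_density(depths, L0=10.0, a=6.0)    # surface-concentrated crop
deep = root_density(depths, L0=10.0, a=1.0)       # deep-rooted crop
for z, s, d in zip(depths, shallow, deep):
    print(f"{z:4.2f} m  shallow={s:6.2f}  deep={d:6.2f}")
```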

    Dynamic Social Adaptation of Motion-Related Neurons in Primate Parietal Cortex

    Social brain function, which allows us to adapt our behavior to social context, is poorly understood at the single-cell level, largely due to technical limitations. But the questions involved are vital: How do neurons recognize and modulate their activity in response to social context? To probe the mechanisms involved, we developed a novel recording technique, called multi-dimensional recording, and applied it simultaneously in the left parietal cortices of two monkeys while they shared a common social space. When the monkeys sat near each other but did not interact, each monkey's parietal activity showed a robust response preference for actions of his own right arm and almost no response to actions of the other's arm. This preference broke down when social conflict emerged between the monkeys—specifically, when both were able to reach for the same food item placed on the table between them. Under these circumstances, parietal neurons started to show complex combinatorial responses to the motion of self and other. Parietal cortex adapted its response properties to the social context by discarding and recruiting different neural populations. Our results suggest that parietal neurons can recognize social events in the environment, link them with the current social context, and form part of a larger social brain network.

    fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex

    Background: Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional magnetic resonance imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation. Results: The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network adapted more strongly to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency. Conclusions: These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds and demonstrate that fMR-A is sensitive to multisensory congruency effects that may not be revealed in BOLD amplitude per se.
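
    The fMR-A logic can be illustrated with a toy computation: if a voxel contains congruency-sensitive multisensory populations, its response should drop more across repeated congruent pairs than across repeated incongruent pairs. The sketch below uses invented numbers and a simple multiplicative suppression rule; it is not the study's data or analysis pipeline.

```python
# Toy illustration of the fMR-adaptation logic; all values are invented.
def adapted_response(initial: float, repetitions: int, rate: float) -> float:
    """Response after repeated presentations, assuming a fixed fractional
    suppression per repetition."""
    return initial * (1.0 - rate) ** repetitions

initial_bold = 1.0
# Assumption: congruent letter-speech pairs drive congruency-sensitive
# populations harder, so they adapt at a higher rate than incongruent pairs.
congruent = adapted_response(initial_bold, repetitions=3, rate=0.30)
incongruent = adapted_response(initial_bold, repetitions=3, rate=0.10)
print(f"congruent: {congruent:.2f}, incongruent: {incongruent:.2f}")
# A larger congruent-vs-incongruent adaptation difference is the signature
# sought in the superior temporal cortex clusters.
```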

    Heterochrony and Cross-Species Intersensory Matching by Infant Vervet Monkeys

    Understanding the evolutionary origins of a phenotype requires understanding the relationship between ontogenetic and phylogenetic processes. Human infants have been shown to undergo a process of perceptual narrowing during their first year of life, whereby their intersensory ability to match the faces and voices of another species declines as they get older. We investigated the evolutionary origins of this behavioral phenotype by examining whether this developmental process occurs in non-human primates as well. We tested the ability of infant vervet monkeys (Cercopithecus aethiops), ranging in age from 23 to 65 weeks, to match the faces and voices of another non-human primate species (the rhesus monkey, Macaca mulatta). Even though the vervets had no prior exposure to rhesus monkey faces and vocalizations, our findings show that infant vervets can, in fact, recognize the correspondence between rhesus monkey faces and voices (although they do so by looking at the non-matching face for a greater proportion of overall looking time), and can do so well beyond the age of perceptual narrowing in human infants. Our results further suggest that the pattern of matching by vervet monkeys is influenced by the emotional saliency of the Face+Voice combination: although they looked at the non-matching screen for Face+Voice combinations, they switched to looking at the matching screen when the Voice was replaced with a complex tone of equal duration. Furthermore, an analysis of pupillary responses revealed greater pupil dilation when the monkeys looked at the matching natural face/voice combination than at the face/tone combination. Because the infant vervets in the current study exhibited cross-species intersensory matching far later in development than do human infants, our findings suggest either that intersensory perceptual narrowing does not occur in Old World monkeys or that it occurs later in development. We argue that these findings reflect the faster rate of neural development in monkeys relative to humans and the resulting differential interaction of this factor with the effects of early experience.

    Monkeys and Humans Share a Common Computation for Face/Voice Integration

    Speech production involves movement of the mouth and other regions of the face, resulting in visual motion cues. These visual cues enhance the intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, this should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share vocal production biomechanics with humans and communicate face-to-face with vocalizations. It is unknown, however, whether they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models, such as the principle of inverse effectiveness and a “race” model, failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to the visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates.
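
    The contrast between the two candidate mechanisms can be sketched numerically: a race model predicts that multisensory detection time is simply the faster of two independent unisensory processes, whereas a superposition model sums the two evidence streams so that threshold is reached sooner than either stream alone would allow. Below is a toy simulation under those assumptions; the accumulator formulation and all parameter values are invented for illustration, not fitted to the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, threshold = 10_000, 10.0
rate_aud, rate_vis = 0.9, 0.7  # invented unisensory evidence rates

def detection_times(rate: float) -> np.ndarray:
    """Time for a noisy linear accumulator to reach a fixed threshold."""
    drift = np.clip(rng.normal(rate, 0.3, size=n), 0.05, None)
    return threshold / drift

t_aud = detection_times(rate_aud)
t_vis = detection_times(rate_vis)

# Race model: respond as soon as the first unisensory process finishes.
race = np.minimum(t_aud, t_vis)

# Superposition model: visual and auditory activity sum linearly, so the
# combined accumulator climbs with the summed drift rate.
superposition = detection_times(rate_aud + rate_vis)

print(f"auditory alone: {t_aud.mean():.2f}")
print(f"race model    : {race.mean():.2f}")
print(f"superposition : {superposition.mean():.2f}  (fastest)")
```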